Automatic first-arrival picking method via intelligent Markov optimal decision processes
Authors
Abstract
Picking the first arrival is an important step in seismic processing, and the large volume of data calls for automatic and objective picking. In this paper, we formulate first-arrival picking as an intelligent Markov decision process in a multi-dimensional feature attribute space. By designing a reasonable model, global optimization is carried out in the reward function space to obtain the path with the largest cumulative value, achieving the goal of automatically picking up the first arrival. The state-value function contains a distance-related discount factor γ, which enables the picking to maintain lateral continuity and avoid bad-trace information in the data. On this basis, the paper further introduces an optimized model that combines a fuzzy clustering-based reward function with a structure-based Gaussian stochastic policy, thereby reducing the difficulty of model design and making the picking more accurate and automatic. Testing the approach on field data reveals its properties and shows that it can pick first arrivals and has a certain quality control ability, especially when the first-arrival energy is weak (the signal-to-noise ratio is low) or there are adjacent complex waveforms in the shallow layer.
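The core idea of the abstract, finding the picking path with the largest discounted cumulative reward while enforcing lateral continuity across traces, can be illustrated with a small dynamic-programming sketch. This is not the authors' implementation: the reward grid, the `max_jump` continuity constraint, and the function name are all hypothetical, and the real method works in a multi-dimensional feature attribute space with a learned policy rather than a fixed grid.

```python
# Minimal sketch (hypothetical, not the paper's method): treat each trace as a
# decision step and each time sample as a state. reward[t][i] is an assumed
# attribute score for sample i of trace t; gamma is a discount factor; picks on
# adjacent traces may differ by at most max_jump samples (lateral continuity).

def pick_first_arrivals(reward, gamma=0.95, max_jump=2):
    n_traces, n_samples = len(reward), len(reward[0])
    # value[t][i]: best discounted cumulative reward of a path ending at (t, i)
    value = [row[:] for row in reward]
    parent = [[0] * n_samples for _ in range(n_traces)]
    for t in range(1, n_traces):
        for i in range(n_samples):
            lo, hi = max(0, i - max_jump), min(n_samples, i + max_jump + 1)
            best = max(range(lo, hi), key=lambda j: value[t - 1][j])
            parent[t][i] = best
            value[t][i] = reward[t][i] + gamma * value[t - 1][best]
    # backtrack from the best final state to recover one pick per trace
    i = max(range(n_samples), key=lambda j: value[-1][j])
    path = [i]
    for t in range(n_traces - 1, 0, -1):
        i = parent[t][i]
        path.append(i)
    return path[::-1]
```

On a synthetic reward grid whose maxima lie on a dipping event, the recovered path follows the event one sample per trace, which is the "largest cumulative value" behavior the abstract describes; a low-reward (bad) trace is bridged by the discounted value carried over from its neighbors.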
Similar resources
First-Order Markov Decision Processes
Markov Decision Processes (MDPs) [7] have developed lately as a standard method for representing uncertainty in decision-theoretic planning. Traditional MDP solution techniques have the drawback that they require an explicit state space, limiting their applicability to real-world problems due to the large number of world states occurring in such problems. Recent work addresses this drawback via...
Stochastic Scheduling Games with Markov Decision Arrival Processes
In Hordijk & Koole [4,5] a new type of arrival process, the Markov Decision Arrival Process (MDAP), was introduced which can be used to model certain dependencies between arrival streams and the system at which the arrivals occur. This arrival process was used to solve control problems with several controllers having a common objective, where the output from one controlled node is fed into a se...
Solving Markov Decision Processes via Simulation
This chapter presents an overview of simulation-based techniques useful for solving Markov decision problems/processes (MDPs). MDPs are problems of sequential decision-making in which decisions made in each state collectively affect the trajectory of the states visited by the system — over a time horizon of interest to the analyst. The trajectory in turn, usually, affects the performance of the...
Bounded Parameter Markov Decision Processes
In this paper, we introduce the notion of a bounded parameter Markov decision process as a generalization of the traditional exact MDP. A bounded parameter MDP is a set of exact MDPs specified by giving upper and lower bounds on transition probabilities and rewards (all the MDPs in the set share the same state and action space). Bounded parameter MDPs can be used to represent variation or uncert...
Learning Qualitative Markov Decision Processes
To navigate in natural environments, a robot must decide the best action to take according to its current situation and goal, a problem that can be represented as a Markov Decision Process (MDP). In general, it is assumed that a reasonable state representation and transition model can be provided by the user to the system. When dealing with complex domains, however, it is not always easy or pos...
Journal
Journal title: Journal of Geophysics and Engineering
Year: 2021
ISSN: 1742-2140, 1742-2132
DOI: https://doi.org/10.1093/jge/gxab026